Results 1 - 20 of 28
1.
Public Health Rep ; 136(5): 554-561, 2021.
Article in English | MEDLINE | ID: mdl-34139910

ABSTRACT

OBJECTIVES: Federal open-data initiatives that promote increased sharing of federally collected data are important for transparency, data quality, trust, and relationships with the public and state, tribal, local, and territorial partners. These initiatives advance understanding of health conditions and diseases by providing data to researchers, scientists, and policymakers for analysis, collaboration, and use outside the Centers for Disease Control and Prevention (CDC), particularly for emerging conditions such as COVID-19, for which data needs are constantly evolving. Since the beginning of the pandemic, CDC has collected person-level, de-identified data from jurisdictions and currently has more than 8 million records. We describe how CDC designed and produces 2 de-identified public datasets from these collected data. METHODS: We included data elements based on usefulness, public request, and privacy implications; we suppressed some field values to reduce the risk of re-identification and exposure of confidential information. We created datasets and verified them for privacy and confidentiality by using data management platform analytic tools and R scripts. RESULTS: Unrestricted data are available to the public through Data.CDC.gov, and restricted data, with additional fields, are available with a data-use agreement through a private repository on GitHub.com. PRACTICE IMPLICATIONS: Enriched understanding of the available public data, the methods used to create these data, and the algorithms used to protect the privacy of de-identified people allow for improved data use. Automating data-generation procedures improves the volume and timeliness of sharing data.
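
As a rough illustration of the kind of suppression step described above, the sketch below blanks out quasi-identifier values whose combination occurs in too few records to release safely. It is a minimal Python/pandas sketch with hypothetical column names and threshold, not the CDC pipeline (which the abstract says is built on data management platform tools and R scripts).

```python
# Illustrative sketch only (not the CDC pipeline): suppress quasi-identifier
# values whose combination is too rare to release safely. Column names are hypothetical.
import pandas as pd

def suppress_rare_combinations(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> pd.DataFrame:
    """Replace quasi-identifier values with 'Suppressed' wherever the combination
    of quasi-identifier values occurs in fewer than k rows."""
    out = df.copy()
    group_sizes = out.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    out.loc[group_sizes < k, quasi_identifiers] = "Suppressed"
    return out

if __name__ == "__main__":
    cases = pd.DataFrame({
        "age_group":   ["0-9", "10-19", "10-19", "80+"],
        "sex":         ["Female", "Male", "Male", "Female"],
        "county_fips": ["01001", "01001", "01001", "56045"],
    })
    # With k=2, the unique rows are suppressed and the repeated combination survives.
    print(suppress_rare_combinations(cases, ["age_group", "sex", "county_fips"], k=2))
```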


Subject(s)
COVID-19/epidemiology , Centers for Disease Control and Prevention, U.S./organization & administration , Confidentiality/standards , Data Anonymization/standards , Centers for Disease Control and Prevention, U.S./standards , Humans , Pandemics , SARS-CoV-2 , United States/epidemiology
2.
J Med Internet Res ; 22(11): e19597, 2020 11 26.
Article in English | MEDLINE | ID: mdl-33177037

ABSTRACT

BACKGROUND: De-identifying personal information is critical when using personal health data for secondary research. The Observational Medical Outcomes Partnership Common Data Model (CDM), defined by the nonprofit organization Observational Health Data Sciences and Informatics, has been gaining attention for its use in the analysis of patient-level clinical data obtained from various medical institutions. When analyzing such data in a public environment such as a cloud-computing system, an appropriate de-identification strategy is required to protect patient privacy. OBJECTIVE: This study proposes and evaluates a de-identification strategy that comprises several rules together with privacy models such as k-anonymity, l-diversity, and t-closeness. The proposed strategy was evaluated using an actual CDM database. METHODS: The CDM database used in this study was constructed by the Anam Hospital of Korea University. Analysis and evaluation were performed using the ARX anonymizing framework in combination with the k-anonymity, l-diversity, and t-closeness privacy models. RESULTS: The CDM database, which was constructed according to the rules established by Observational Health Data Sciences and Informatics, exhibited a low risk of re-identification: The highest re-identifiable record rate (11.3%) in the dataset was found in the DRUG_EXPOSURE table, with a re-identification success rate of 0.03%. However, because all tables include at least one "highest risk" value of 100%, suitable anonymizing techniques are required; moreover, the CDM database preserves the "source values" (raw data), a combination of which could increase the risk of re-identification. Therefore, this study proposes an enhanced strategy to de-identify the source values to significantly reduce not only the highest risk in the k-anonymity, l-diversity, and t-closeness privacy models but also the overall possibility of re-identification. CONCLUSIONS: Our proposed de-identification strategy effectively enhanced the privacy of the CDM database, thereby encouraging clinical research involving multiple centers.
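
For readers unfamiliar with the privacy models named above, the sketch below shows the core check behind k-anonymity: every combination of quasi-identifier values must be shared by at least k rows. It is a generic Python/pandas illustration with hypothetical column names, not the ARX-based evaluation used in the study.

```python
# Generic illustration of the k-anonymity notion the study evaluates with ARX:
# every combination of quasi-identifier values must be shared by at least k rows.
# Table and column names are hypothetical, not the OMOP CDM schema itself.
import pandas as pd

def smallest_equivalence_class(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Size of the smallest group of rows sharing the same quasi-identifier values;
    the table is k-anonymous for any k less than or equal to this size."""
    return int(df.groupby(quasi_identifiers).size().min())

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list[str], k: int) -> bool:
    return smallest_equivalence_class(df, quasi_identifiers) >= k

if __name__ == "__main__":
    person = pd.DataFrame({
        "year_of_birth": [1980, 1980, 1980, 1955],
        "gender": ["F", "F", "F", "M"],
        "zip3": ["021", "021", "021", "100"],
    })
    print(smallest_equivalence_class(person, ["year_of_birth", "gender", "zip3"]))  # 1
    print(is_k_anonymous(person, ["year_of_birth", "gender", "zip3"], k=2))         # False
```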


Subject(s)
Cloud Computing/standards , Confidentiality/standards , Data Anonymization/standards , Databases, Factual/standards , Medical Informatics/methods , Humans
3.
Gac. sanit. (Barc., Ed. impr.) ; 34(5): 521-523, Sept.-Oct. 2020. graf
Article in Spanish | IBECS | ID: ibc-198877

ABSTRACT

Recent changes in the European regulations on the protection of personal data still allow the use of health data for research purposes, but they establish the data protection impact assessment as an instrument for reflection and risk analysis in the data-processing workflow. The publication of a guide facilitates this impact assessment, although the guide is not directly applicable to research projects. We describe the experience of a specific project, showing how the context of the processing gains relevance relative to the characteristics of the data. Carrying out an impact assessment is an opportunity to ensure compliance with the principles of data protection in an increasingly complex environment with growing ethical challenges.


Subject(s)
Humans , Computer Security/trends , Biomedical Research/methods , Research Report/standards , Ethics, Research , Journal Impact Factor , Data Anonymization/standards , Data Warehousing/standards
5.
Med Law Rev ; 28(3): 478-501, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32413130

ABSTRACT

Data sharing has long been a cornerstone of healthcare and research and is only due to become more important with the rise of Big Data analytics and advanced therapies. Cell therapies, for example, rely not only on donated cells but also essentially on donated information to make them traceable. Despite the associated importance of concepts such as 'donor anonymity', the concept of anonymisation remains contentious. The Article 29 Working Party's 2014 guidance on 'Anonymisation Techniques' has perhaps helped encourage a perception that anonymity is the result of data modification 'techniques', rather than a broader process involving management of information and context. In light of this enduring ambiguity, this article advocates a 'relative' understanding of anonymity and supports this interpretation with reference not only to the General Data Protection Regulation but also to European Union health-related legislation, which also alludes to the concept. Anonymity, I suggest, should be understood not as a 'technique' which removes the need for information governance but rather as a legal standard of reasonable risk-management, which can only be satisfied by effective data protection. As such, anonymity can be not so much an alternative to data protection as its mirror, requiring similar safeguards to maintain privacy and confidentiality.


Subject(s)
Computer Security/legislation & jurisprudence , Data Anonymization/legislation & jurisprudence , Data Anonymization/standards , Guidelines as Topic/standards , Jurisprudence , Biomedical Research , Clinical Trials as Topic/legislation & jurisprudence , Confidentiality , European Union , Privacy , Tissue Donors/legislation & jurisprudence
6.
Ethics Hum Res ; 42(2): 13-27, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32233117

ABSTRACT

We found no studies in the United States that explored research participants' perspectives about sharing their qualitative data. We present findings from interviews with 30 individuals who participated in sensitive qualitative studies to explore their understanding and concerns regarding qualitative data sharing. The vast majority supported sharing qualitative data so long as their data were deidentified and shared only among researchers. However, they raised concerns about confidentiality if the data were not adequately deidentified and about misuse by secondary users if data were shared beyond the research community. These concerns, though, did not deter them from participating in research. Notably, participants hoped their data would be shared and may have expected or assumed this was already happening. While many could not recollect details about data-sharing plans for studies in which they participated, they trusted researchers and institutions to appropriately handle data sharing. If individuals view data sharing as an extension or integral part of their participation in qualitative research, then researchers may have a stronger obligation to share qualitative data than previously thought. Guidelines and tools to assist researchers and institutional review board members in ethical and responsible qualitative data sharing are urgently needed.


Subject(s)
Confidentiality/ethics , Data Anonymization/standards , Information Dissemination/ethics , Research Subjects/psychology , Adult , Female , Humans , Interviews as Topic , Male , Middle Aged , Qualitative Research , Research Personnel/standards , Trust , United States
7.
J Med Syst ; 44(5): 99, 2020 Apr 02.
Article in English | MEDLINE | ID: mdl-32240368

ABSTRACT

We propose a de-identification system that runs in standalone mode. The system handles the de-identification of radiation oncology patients' clinical and annotated imaging data, including RTSTRUCT, RTPLAN, and RTDOSE. The clinical data consist of the patient's diagnosis, stages, outcome, and treatment information. The imaging data may include diagnostic, therapy-planning, and verification images. Archival of longitudinal radiation oncology verification images, such as cone-beam CT scans, together with the initial imaging and clinical data, is preserved in the process. During de-identification, the system keeps a reference to the original data identity in encrypted form, which can be used for re-identification if necessary.
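
The sketch below illustrates the general idea of keeping an encrypted reference to the original identity alongside the de-identified record so that authorized re-identification remains possible. It is an assumed approach using the third-party cryptography package's Fernet API, not the described system's actual implementation.

```python
# Sketch of storing an encrypted reference to the original identity alongside the
# de-identified record (assumed approach; not the described system's code).
# Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # kept by the data custodian, never released
cipher = Fernet(key)

def deidentify(record: dict, pseudonym: str) -> dict:
    """Replace the patient identifier with a pseudonym and keep the original
    identifier encrypted, so authorized re-identification stays possible."""
    original_id = record.pop("patient_id")
    record["pseudonym"] = pseudonym
    record["encrypted_reference"] = cipher.encrypt(original_id.encode()).decode()
    return record

def reidentify(record: dict) -> str:
    """Recover the original identifier; only possible with the custodian's key."""
    return cipher.decrypt(record["encrypted_reference"].encode()).decode()

if __name__ == "__main__":
    rec = deidentify({"patient_id": "MRN-0042", "diagnosis": "C61"}, pseudonym="RT-0001")
    print(rec["pseudonym"], reidentify(rec))  # RT-0001 MRN-0042
```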


Subject(s)
Data Anonymization/standards , Electronic Health Records/organization & administration , Radiation Oncology/organization & administration , Cone-Beam Computed Tomography/methods , Electronic Health Records/standards , Humans , Image Processing, Computer-Assisted/methods , Radiation Oncology/standards
8.
Trials ; 21(1): 200, 2020 Feb 18.
Article in English | MEDLINE | ID: mdl-32070405

ABSTRACT

BACKGROUND: Regulatory agencies, such as the European Medicines Agency and Health Canada, are requiring the public sharing of clinical trial reports that are used to make drug approval decisions. Both agencies have provided guidance for the quantitative anonymization of these clinical reports before they are shared. There is limited empirical information on the effectiveness of this approach in protecting patient privacy for clinical trial data. METHODS: In this paper we empirically test the hypothesis that when these guidelines are implemented in practice, they provide adequate privacy protection to patients. An anonymized clinical study report for a trial on a non-steroidal anti-inflammatory drug that is sold as a prescription eye drop was subjected to re-identification. The target was 500 patients in the USA. Only suspected matches to real identities were reported. RESULTS: Six suspected matches with low confidence scores were identified. Each suspected match took 24.2 h of effort. Social media and death records provided the most useful information for getting the suspected matches. CONCLUSIONS: These results suggest that the anonymization guidance from these agencies can provide adequate privacy protection for patients, and the modes of attack can inform further refinements of the methodologies they recommend in their guidance for manufacturers.
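
To make the attack model concrete, the toy sketch below shows the linkage step such a re-identification attempt rests on: suspected matches arise where quasi-identifiers in the anonymized report line up with an external source. All records, fields, and values here are hypothetical, and the study's actual procedure (manual searching of social media and death records with confidence scoring) is far more involved.

```python
# Toy illustration of quasi-identifier linkage behind a re-identification attempt.
# All data and column names are hypothetical.
import pandas as pd

anonymized = pd.DataFrame({
    "subject_id": ["S-101", "S-102"],
    "age_band": ["60-64", "70-74"],
    "sex": ["F", "M"],
    "state": ["OH", "TX"],
})

external = pd.DataFrame({  # e.g., public posts or death records
    "name": ["Jane Roe", "John Doe"],
    "age_band": ["60-64", "70-74"],
    "sex": ["F", "M"],
    "state": ["OH", "CA"],
})

# An exact join on the quasi-identifiers yields *suspected* matches only;
# each would still need manual verification and a confidence score.
suspected = anonymized.merge(external, on=["age_band", "sex", "state"], how="inner")
print(suspected[["subject_id", "name"]])
```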


Subject(s)
Clinical Trials as Topic/standards , Data Anonymization/standards , Electronic Health Records/standards , Privacy , Research Design/standards , Canada , Electronic Health Records/organization & administration , Humans
9.
BMC Med Inform Decis Mak ; 20(1): 14, 2020 01 30.
Article in English | MEDLINE | ID: mdl-32000770

ABSTRACT

BACKGROUND: Automated machine-learning systems are able to de-identify electronic medical records, including free-text clinical notes. Use of such systems would greatly boost the amount of data available to researchers, yet their deployment has been limited due to uncertainty about their performance when applied to new datasets. OBJECTIVE: We present practical options for clinical note de-identification, assessing performance of machine learning systems ranging from off-the-shelf to fully customized. METHODS: We implement a state-of-the-art machine learning de-identification system, training and testing on pairs of datasets that match the deployment scenarios. We use clinical notes from two i2b2 competition corpora, the Physionet Gold Standard corpus, and parts of the MIMIC-III dataset. RESULTS: Fully customized systems remove 97-99% of personally identifying information. Performance of off-the-shelf systems varies by dataset, with performance mostly above 90%. Providing a small labeled dataset or large unlabeled dataset allows for fine-tuning that improves performance over off-the-shelf systems. CONCLUSION: Health organizations should be aware of the levels of customization available when selecting a de-identification deployment solution, in order to choose the one that best matches their resources and target performance level.
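
As a point of reference for what de-identification of free text means in practice, the sketch below is a purely rule-based baseline that redacts a few obvious PHI patterns. It is not the machine-learning system evaluated in the study and would fall well short of the reported 90-99% performance.

```python
# Simple rule-based baseline for clinical-note redaction, shown only to make the
# task concrete; it is NOT the machine-learning system evaluated in the study.
import re

PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact(note: str) -> str:
    """Replace matched spans with a bracketed placeholder naming the PHI type."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

if __name__ == "__main__":
    print(redact("Seen on 03/14/2019, MRN: 00123456, call 617-555-0101 to follow up."))
    # Seen on [DATE], [MRN], call [PHONE] to follow up.
```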


Subject(s)
Data Anonymization/standards , Electronic Health Records , Machine Learning/standards , Datasets as Topic , Humans
10.
Eur J Health Law ; 27(1): 35-57, 2020 03 04.
Article in English | MEDLINE | ID: mdl-33652409

ABSTRACT

The European General Data Protection Regulation (GDPR) has dotted the i's and crossed the t's in the context of academic medical research. One year into the GDPR, it is clear that a change of mindset and the uptake of new procedures are required. Research organisations have been exploring the possibility of establishing a code of conduct, good practices, and/or guidelines for researchers that translate the GDPR's abstract principles into concrete measures suitable for implementation. We introduce a proposal, developed by a multidisciplinary team at the University Hospitals Leuven, for the implementation of the GDPR in academic research involving the processing of health-related data. The proposal is based on three elements, three stages, and six specific safeguards. Transparency and pseudonymisation are considered key to striking a balance between researchers' need to collect and analyse personal data and data subjects' growing wish for informational control.
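
Pseudonymisation, one of the safeguards mentioned above, is commonly implemented with a keyed hash so that the same subject always maps to the same pseudonym while only the key holder can regenerate the mapping. The sketch below shows that generic construction in Python; it is an illustrative assumption, not the tooling of the Leuven proposal.

```python
# Generic sketch of pseudonymisation via a keyed hash (HMAC): the same subject
# always maps to the same pseudonym, and only the key holder can regenerate the
# mapping. Illustrative only; not the Leuven proposal's own tooling.
import hmac
import hashlib

SECRET_KEY = b"held-by-the-data-controller-only"  # assumed key management

def pseudonymize(subject_id: str) -> str:
    digest = hmac.new(SECRET_KEY, subject_id.encode(), hashlib.sha256).hexdigest()
    return f"PS-{digest[:16]}"

if __name__ == "__main__":
    print(pseudonymize("patient-12345"))                                   # stable pseudonym
    print(pseudonymize("patient-12345") == pseudonymize("patient-12345"))  # True
```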


Subject(s)
Biomedical Research/legislation & jurisprudence , Computer Security/legislation & jurisprudence , Confidentiality/legislation & jurisprudence , Academic Medical Centers , Data Anonymization/standards , European Union , Hospitals, University , Humans , Patient Access to Records/standards , Research Personnel
11.
J Law Health ; 34(1): 30-105, 2020.
Article in English | MEDLINE | ID: mdl-33449456

ABSTRACT

In light of the confusion invited by applying the label "de-identified" to information that can be used to identify patients, it is paramount that regulators, compliance professionals, patient advocates and the general public understand the significant differences between the standards applied by HIPAA and those applied by permissive "de-identification guidelines." This Article discusses those differences in detail. The discussion proceeds in four Parts. Part II (HIPAA's Heartbeat: Why HIPAA Protects Identifiable Patient Information) examines Congress's motivations for defining individually identifiable health information broadly, which included to stop the harms patients endured prior to 1996 arising from the commercial sale of their medical records. Part III (Taking the "I" Out of Identifiable Information: HIPAA's Requirements for De-Identified Health Information) discusses HIPAA's requirements for de-identification that were never intended to create a loophole for identifiable patient information to escape HIPAA's protections. Part IV (Anatomy of a Hack: Methods for Labeling Identifiable information "De-Identified") examines the goals, methods, and results of permissive "de-identification guidelines" and compares them to HIPAA's requirements. Part V (Protecting Un-Protected Health Information) evaluates the suitability of permissive "de-identification guidelines," concluding that the vulnerabilities inherent in their current articulation render them ineffective as a data protection standard. It also discusses ways in which compliance professionals, regulators, and advocates can foster accountability and transparency in the utilization of health information that can be used to identify patients.


Subject(s)
Confidentiality/legislation & jurisprudence , Data Anonymization/legislation & jurisprudence , Data Anonymization/standards , Guidelines as Topic/standards , Health Insurance Portability and Accountability Act , Personal Information/legislation & jurisprudence , Female , Humans , Male , United States
12.
J Law Med Ethics ; 47(2): 213-231, 2019 06.
Article in English | MEDLINE | ID: mdl-31298108

ABSTRACT

The revised Common Rule includes a new option for the conduct of secondary research with identifiable data and biospecimens: regulatory broad consent. Motivated by concerns regarding autonomy and trust in the research enterprise, regulators had initially proposed broad consent in a manner that would have rendered it the exclusive approach to secondary research with all biospecimens, regardless of identifiability. Based on public comments from both researchers and patients concerned that this approach would hinder important medical advances, however, regulators decided to largely preserve the status quo approach to secondary research with biospecimens and data. The Final Rule therefore allows such research to proceed without specific informed consent in a number of circumstances, but it also offers regulatory broad consent as a new, optional pathway for secondary research with identifiable data and biospecimens. In this article, we describe the parameters of regulatory broad consent under the new rule, explain why researchers and research institutions are unlikely to utilize it, outline recommendations for regulatory broad consent issued by the Secretary's Advisory Committee on Human Research Protections (SACHRP), and sketch an empirical research agenda for the sorts of questions about regulatory broad consent that remain to be answered as the research community embarks on Final Rule implementation.


Subject(s)
Biomedical Research/ethics , Biomedical Research/legislation & jurisprudence , Human Experimentation/legislation & jurisprudence , Informed Consent/legislation & jurisprudence , Presumed Consent/legislation & jurisprudence , Advisory Committees , Biological Specimen Banks , Confidentiality/standards , Data Anonymization/standards , Humans , Personal Information/standards
13.
J Med Internet Res ; 21(5): e13484, 2019 05 31.
Article in English | MEDLINE | ID: mdl-31152528

ABSTRACT

BACKGROUND: The secondary use of health data is central to biomedical research in the era of data science and precision medicine. National and international initiatives, such as the Global Open Findable, Accessible, Interoperable, and Reusable (GO FAIR) initiative, are supporting this approach in different ways (eg, making the sharing of research data mandatory or improving the legal and ethical frameworks). Preserving patients' privacy is crucial in this context. De-identification and anonymization are the two most common terms used to refer to the technical approaches that protect privacy and facilitate the secondary use of health data. However, it is difficult to find a consensus on the definitions of the concepts or on the reliability of the techniques used to apply them. A comprehensive review is needed to better understand the domain, its capabilities, its challenges, and the ratio of risk between the data subjects' privacy on one side, and the benefit of scientific advances on the other. OBJECTIVE: This work aims at better understanding how the research community comprehends and defines the concepts of de-identification and anonymization. A rich overview should also provide insights into the use and reliability of the methods. Six aspects will be studied: (1) terminology and definitions, (2) backgrounds and places of work of the researchers, (3) reasons for anonymizing or de-identifying health data, (4) limitations of the techniques, (5) legal and ethical aspects, and (6) recommendations of the researchers. METHODS: Based on a scoping review protocol designed a priori, MEDLINE was searched for publications discussing de-identification or anonymization and published between 2007 and 2017. The search was restricted to MEDLINE to focus on the life sciences community. The screening process was performed by two reviewers independently. RESULTS: After searching 7972 records that matched at least one search term, 135 publications were screened and 60 full-text articles were included. (1) Terminology: Definitions of the terms de-identification and anonymization were provided in less than half of the articles (29/60, 48%). When both terms were used (41/60, 68%), their meanings divided the authors into two equal groups (19/60, 32%, each) with opposed views. The remaining articles (3/60, 5%) were equivocal. (2) Backgrounds and locations: Research groups were based predominantly in North America (31/60, 52%) and in the European Union (22/60, 37%). The authors came from 19 different domains; computer science (91/248, 36.7%), biomedical informatics (47/248, 19.0%), and medicine (38/248, 15.3%) were the most prevalent ones. (3) Purpose: The main reason declared for applying these techniques is to facilitate biomedical research. (4) Limitations: Progress is made on specific techniques but, overall, limitations remain numerous. (5) Legal and ethical aspects: Differences exist between nations in the definitions, approaches, and legal practices. (6) Recommendations: The combination of organizational, legal, ethical, and technical approaches is necessary to protect health data. CONCLUSIONS: Interest is growing for privacy-enhancing techniques in the life sciences community. This interest crosses scientific boundaries, involving primarily computer science, biomedical informatics, and medicine. The variability observed in the use of the terms de-identification and anonymization emphasizes the need for clearer definitions as well as for better education and dissemination of information on the subject. The same observation applies to the methods. Several legislations, such as the American Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR), regulate the domain. Using the definitions they provide could help address the variable use of these two concepts in the research community.


Subject(s)
Biomedical Research/methods , Data Anonymization/standards , Humans , Reproducibility of Results
14.
Am J Epidemiol ; 188(5): 851-861, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30877288

ABSTRACT

Methodological advancements in epidemiology, biostatistics, and data science have strengthened the research world's ability to use data captured from electronic health records (EHRs) to address pressing medical questions, but gaps remain. We describe methods investments that are needed to curate EHR data toward research quality and to integrate complementary data sources when EHR data alone are insufficient for research goals. We highlight new methods and directions for improving the integrity of medical evidence generated from pragmatic trials, observational studies, and predictive modeling. We also discuss needed methods contributions to further ease data sharing across multisite EHR data networks. Throughout, we identify opportunities for training and for bolstering collaboration among subject matter experts, methodologists, practicing clinicians, and health system leaders to help ensure that methods problems are identified and resulting advances are translated into mainstream research practice more quickly.


Subject(s)
Big Data , Biostatistics/methods , Electronic Health Records/statistics & numerical data , Medicine/statistics & numerical data , Public Health , Clinical Trials as Topic/methods , Comparative Effectiveness Research/methods , Confidentiality/standards , Cooperative Behavior , Data Accuracy , Data Anonymization/standards , Epidemiologic Methods , Epidemiology/organization & administration , Humans , Information Dissemination , Interprofessional Relations , Multicenter Studies as Topic/methods , Multicenter Studies as Topic/standards , Observational Studies as Topic/methods , Retrospective Studies , United States
15.
Account Res ; 24(8): 483-496, 2017.
Article in English | MEDLINE | ID: mdl-29140743

ABSTRACT

While the anonymization of biological samples and data may help protect participant privacy, there is still debate over whether this alone is a sufficient safeguard to ensure the ethical conduct of research. The purpose of this systematic review is to examine whether review by an ethics committee is necessary in the context of anonymized research, and what the considerations in such an ethics review would be. The review of normative documents issued by national- and international-level organizations reveals a growing concern over the ability of anonymization procedures to protect against re-identification. This is particularly true in genomic research, where the uniquely identifying nature of genetic material, along with advances in technology, has complicated previous standards of identifiability. Even where individuals may not be identifiable, there is a risk of group harm against which anonymization alone may not protect. We conclude that the majority of normative documents support the position that review by an ethics committee is necessary to address the concerns associated with the use of anonymized samples and data for research.


Subject(s)
Confidentiality/standards , Data Anonymization/standards , Ethics Committees, Research/standards , Ethics, Research , Humans
17.
Biopreserv Biobank ; 14(3): 224-30, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27219861

ABSTRACT

Anonymization is a recognized process by which identifiers can be removed from identifiable data to protect an individual's confidentiality, and it is standard practice when sharing data in biomedical research. However, a plethora of terms, such as coding, pseudonymization, unlinked, and deidentified, have been and continue to be used, leading to confusion and uncertainty. This article shows that this is a long-standing problem and argues that such continuing uncertainty regarding the levels of protection given to data risks damaging initiatives designed to assist researchers conducting cross-national studies and sharing data internationally. DataSHIELD and the creation of a legal template are used as examples of initiatives that rely on anonymization but where inconsistent terminology could hinder progress. More broadly, this article argues that relying on vague notions of the anonymization process risks real damage to the public's trust in research and in the institutions that carry it out. Research participants who compensate for an incomplete understanding of the research process by trusting those who carry it out may have that trust damaged if the level of protection given to their data does not match their expectations. One step toward ensuring understanding between the parties would be consistent, internationally agreed use of clearly defined terminology, so that all those involved are clear on the level of identifiability of any particular set of data and, therefore, on how those data can be accessed and shared.


Subject(s)
Computer Security/standards , Data Anonymization/standards , Information Dissemination/legislation & jurisprudence , Consensus , Humans , Terminology as Topic
18.
Ann Plast Surg ; 76(6): 611-4, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27015333

ABSTRACT

IMPORTANCE: This work was performed to advance patient care by protecting patient anonymity. OBJECTIVES: This study aimed to analyze the current practices used in patient facial photograph deidentification and set forth standardized guidelines for improving patient autonomy that are congruent with medical ethics and the Health Insurance Portability and Accountability Act (HIPAA). DESIGN: The anonymization guidelines of 13 respected journals were reviewed for adequacy in accordance with the facial-recognition literature. Simple statistics were used to compare the usage of the most common concealment techniques in 8 medical journals that are likely to publish the most facial photographs. SETTING: Not applicable. PARTICIPANTS: Not applicable. MAIN OUTCOME MEASURES: Facial photo deidentification guidelines of 13 journals were ascertained. The number and percentage of patient photographs lacking adequate anonymization in 8 journals were determined. RESULTS: Facial image anonymization guidelines varied across journals. When anonymization was attempted, 87% of the images were inadequately concealed. The most common technique used was masking the eyes alone with a black box. CONCLUSIONS: Most journals evaluated lack specific instructions for properly de-identifying facial photographs. The guidelines introduced here stress that both eyebrows and eyes must be concealed to ensure patient privacy. Examples of proper and inadequate photo anonymization techniques are provided. RELEVANCE: Improving patient care by ensuring greater patient anonymity.
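
The concealment the conclusions call for (covering both the eyes and the eyebrows) can be illustrated with a few lines of Pillow code, shown below. The rectangle coordinates are placeholders; in practice they would come from facial-landmark detection, and this sketch is not part of the study's methodology.

```python
# Minimal sketch of the recommended concealment: a single opaque bar covering both
# the eyebrows AND the eyes. Coordinates are placeholders; in practice they would
# come from facial-landmark detection. Requires the Pillow package.
from PIL import Image, ImageDraw

def mask_eyes_and_brows(path_in: str, path_out: str, box: tuple[int, int, int, int]) -> None:
    """Draw an opaque black rectangle over the region spanning eyebrows and eyes."""
    img = Image.open(path_in).convert("RGB")
    ImageDraw.Draw(img).rectangle(box, fill="black")
    img.save(path_out)

if __name__ == "__main__":
    # Hypothetical file names and (left, top, right, bottom) coordinates.
    mask_eyes_and_brows("patient_face.jpg", "patient_face_masked.jpg", (120, 140, 380, 230))
```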


Subject(s)
Data Anonymization/standards , Editorial Policies , Guidelines as Topic/standards , Photography , Data Anonymization/ethics , Data Anonymization/legislation & jurisprudence , Face , Health Insurance Portability and Accountability Act , Humans , Personal Autonomy , United States
20.
Int J Health Geogr ; 15: 1, 2016 Jan 07.
Article in English | MEDLINE | ID: mdl-26739310

ABSTRACT

BACKGROUND: Anonymisation of spatially referenced data has received increasing attention in recent years. Whereas the research focus has been on the anonymisation of point locations, the disclosure risk arising from publishing inter-point distances, and the corresponding anonymisation methods, have not been studied systematically. METHODS: We propose a new anonymisation method for the release of geographical distances between records of a microdata file (for example, patients in a medical database). We discuss a data-release scheme in which the microdata without coordinates and an additional distance matrix between the corresponding rows of the microdata set are released. In contrast to most other approaches, this method preserves small distances better than larger distances. The distances are modified by a variant of Lipschitz embedding. RESULTS: The effects of the embedding parameters on the risk of data disclosure are evaluated by linkage experiments using simulated data. The results indicate small disclosure risks for appropriate embedding parameters. CONCLUSION: The proposed method is useful if published distance information might be misused for the re-identification of records. The method can be used for publishing scientific-use files and as an additional tool for record-linkage studies.
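
The sketch below shows the textbook Lipschitz embedding that the proposed method builds on: each record is mapped to its minimum distances to a few random reference sets, and only distances between these embedded vectors would be released. The paper's protected variant modifies this construction, so the code illustrates the building block only.

```python
# Basic Lipschitz embedding, to illustrate the building block the paper modifies.
# Each point is mapped to its minimum Euclidean distances to random reference sets;
# per coordinate this map is 1-Lipschitz, so the max-coordinate (Chebyshev) distance
# between embedded vectors never exceeds the original distance.
import numpy as np

rng = np.random.default_rng(42)

def lipschitz_embed(points: np.ndarray, n_ref_sets: int = 8, ref_set_size: int = 3) -> np.ndarray:
    """Map each point to a vector of minimum distances to random reference sets."""
    n = len(points)
    embedded = np.empty((n, n_ref_sets))
    for j in range(n_ref_sets):
        ref = points[rng.choice(n, size=ref_set_size, replace=False)]
        # distance from every point to its nearest member of this reference set
        d = np.linalg.norm(points[:, None, :] - ref[None, :, :], axis=2)
        embedded[:, j] = d.min(axis=1)
    return embedded

if __name__ == "__main__":
    coords = rng.random((100, 2)) * 100.0  # hypothetical point locations
    emb = lipschitz_embed(coords)
    print(np.abs(emb[0] - emb[1]).max())          # lower bound on the true distance
    print(np.linalg.norm(coords[0] - coords[1]))  # true distance (not released)
```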


Subject(s)
Data Anonymization/standards , Databases, Factual/standards , Personal Information/trends , Position-Specific Scoring Matrices , Databases, Factual/statistics & numerical data , Humans , Personal Information/statistics & numerical data